
AI is the new digital ad industry… but the stakes are far greater

Photo: Conny Schneider

by Hanna Kahlert

Zoom has been making the news this week over its controversial new terms and conditions, which now allow the service (in theory) to utilise user data to train its AI-assisted features. While the company has reassured the public that users must opt in to this usage, which is required to use AI functions like transcript generation, privacy concerns remain high.

This comes at the same time as OpenAI’s new GPTBot, an opt-out web scraper intended to train future AIs. As MIDiA identified in April, the new competition in the field of AI will revolve around datasets. The company with the ‘cleanest’, highest-quality data can create the best products fit for commercial use and higher-quality content generation.
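Because GPTBot is opt-out rather than opt-in, the burden falls on site owners to block it. OpenAI publishes GPTBot’s user-agent string, so a site that does not want its content scraped can disallow the crawler in its robots.txt, along these lines:

```text
# Block OpenAI's GPTBot crawler from the entire site
User-agent: GPTBot
Disallow: /
```

Crucially, this only deters future crawls by well-behaved bots that honour robots.txt; anything already scraped remains in the training data.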

The Zoom discussion raises concerns both for users of the platform and about a very near future (if not an existing present) in which any platform or service that is not end-to-end encrypted can simply change its terms and conditions to include AI training. This is a concern for, well, everyone – but it may be an unavoidable future.

It all comes back to data as the most valuable currency of the internet. This is how the digital ad industry skyrocketed to being one of the most lucrative businesses in the world – and why companies like Meta, which has always dealt in data, have become so powerful. AI is the new data use-case, but unlike ads, AI programmes are products in their own right. They can create content, de-skill jobs (creative and administrative, as per the ongoing Hollywood strikes), justify the reduction of team sizes and salaries, and plagiarise without repercussions given their ‘black box’ behind-the-scenes secrecy.

Copyright is also a hotbed issue because of the inability to trace output back to the original works used. The proliferation of these new in-app AI services, especially on the likes of Zoom, where NDA-level business is conducted, doctors hold appointments, and other privacy-critical interactions take place, means that simple user errors like not reading the T&Cs can result in massive privacy breaches and misuse of data. Long term, this becomes a regulatory and company policy issue, but the near-term grey zone is likely to produce slip-ups – and those slip-ups will have knock-on effects nevertheless. After all, once the data is taken and used, it cannot be cleaned from the larger model that trains on it.

Of course, ethical use of data and greater prominence of end-to-end encryption are likely adjustments in the broader market. Greater protective measures for creators are (likely) inevitable as well. These new AIs can offer new revenue opportunities for companies, and not all threats will be realised. But this AI-everything future is inevitable – if it is not here already.

